Calculating Adversarial Risk from Attack Trees: Control Strength and Probabilistic Attackers

Authors

  • Wolter Pieters
  • Mohsen Davarynejad
Abstract

Attack trees are a well-known formalism for quantitative analysis of cyber attacks consisting of multiple steps and alternative paths. It is possible to derive properties of the overall attacks from properties of individual steps, such as cost for the attacker and probability of success. However, in existing formalisms, such properties are considered independent. For example, investing more in an attack step would not increase the probability of success. As this seems counterintuitive, we introduce a framework for reasoning about attack trees based on the notion of control strength, annotating nodes with a function from attacker investment to probability of success. Calculation rules on such trees are defined to enable analysis of optimal attacker investment. Our second result consists of the translation of optimal attacker investment into the associated adversarial risk, yielding what we call adversarial risk trees. The third result is the introduction of probabilistic attacker strategies, based on the fitness (utility) of available scenarios. Together these contributions improve the possibilities for using attack trees in adversarial risk analysis.
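To make the idea concrete, here is a minimal Python sketch of an attack tree whose leaf steps are annotated with a function from attacker investment to success probability, together with a brute-force search for the investment split that maximises the attacker's expected utility (success probability times gain, minus investment). This is only an illustration of the general idea under simple assumptions, not the paper's calculation rules: the diminishing-returns control-strength curve, the example tree, the budget, the step size and the gain are all invented.

from itertools import product

def control_strength(difficulty):
    # Hypothetical control-strength curve: more investment gives a higher
    # success probability, with diminishing returns set by `difficulty`.
    return lambda investment: investment / (investment + difficulty)

# Invented example tree: leaves are (name, curve); AND needs all children, OR any one.
tree = ("OR", [
    ("AND", [("phish credentials", control_strength(5.0)),
             ("abuse remote login", control_strength(2.0))]),
    ("AND", [("bribe insider", control_strength(20.0))]),
])

def leaves(node):
    # Collect the (name, curve) pairs of all leaf steps in the tree.
    _, children = node
    out = []
    for child in children:
        if isinstance(child[1], list):   # internal AND/OR node
            out += leaves(child)
        else:                            # leaf step
            out.append(child)
    return out

def success_probability(node, alloc):
    # Propagate success probabilities bottom-up, assuming independent steps.
    kind, children = node
    probs = []
    for child in children:
        if isinstance(child[1], list):
            probs.append(success_probability(child, alloc))
        else:
            name, curve = child
            probs.append(curve(alloc.get(name, 0.0)))
    if kind == "AND":                    # all child steps must succeed
        result = 1.0
        for q in probs:
            result *= q
        return result
    failure = 1.0                        # OR: at least one child succeeds
    for q in probs:
        failure *= 1.0 - q
    return 1.0 - failure

GAIN, BUDGET, STEP = 100.0, 10.0, 1.0    # invented attacker gain and budget
names = [name for name, _ in leaves(tree)]
grid = [i * STEP for i in range(int(BUDGET / STEP) + 1)]

best = None
for split in product(grid, repeat=len(names)):
    if sum(split) > BUDGET:
        continue
    alloc = dict(zip(names, split))
    p = success_probability(tree, alloc)
    utility = p * GAIN - sum(split)      # expected gain minus total investment
    if best is None or utility > best[0]:
        best = (utility, alloc, p)

utility, alloc, p = best
print("optimal allocation:", alloc)
print("success probability: %.2f, expected attacker utility: %.2f" % (p, utility))

The exhaustive grid search here only stands in for the tree-based calculation rules the abstract refers to; it is meant to show the shape of the problem (choosing how to spread a budget over steps), not an efficient or faithful implementation.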


Similar articles

A Security Verification Framework for SysML Activity Diagrams

Samir Ouchani, Concordia University, 2013. UML and SysML play a central role in modern software and systems engineering and are considered the de facto standard for modeling software and systems. Today's systems are created from a myriad of interacting parts that are combined to produce visible behavior. The main difficulty arises...


Adversarial Machine Learning at Scale

Adversarial examples are malicious inputs designed to fool machine learning models. They often transfer from one model to another, allowing attackers to mount black box attacks without knowledge of the target model’s parameters. Adversarial training is the process of explicitly training a model on adversarial examples, in order to make it more robust to attack or to reduce its test error on cle...


Rational Choice of Security Measures Via Multi-parameter Attack Trees

We present a simple risk-analysis based method for studying the security of institutions against rational (gain-oriented) attacks. Our method uses a certain refined form of attack-trees that are used to estimate the cost and the success probability of attacks. We use elementary game theory to decide whether the system under protection is a realistic target for gain-oriented attackers. Attacks a...
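As a rough illustration of this style of analysis (not the refined multi-parameter model of that paper), the Python snippet below checks whether any attack scenario has a positive expected outcome for a gain-oriented attacker; the costs, success probabilities and gains are invented numbers.

attacks = {
    "steal backup tapes": {"cost": 2000.0, "p_success": 0.10, "gain": 50000.0},
    "compromise web app": {"cost": 8000.0, "p_success": 0.05, "gain": 50000.0},
}

best_outcome = float("-inf")
for name, attack in attacks.items():
    # Expected outcome of a scenario: success probability times gain, minus cost.
    outcome = attack["p_success"] * attack["gain"] - attack["cost"]
    print("%s: expected outcome = %+.0f" % (name, outcome))
    best_outcome = max(best_outcome, outcome)

# The system is a realistic target if the best scenario pays off in expectation.
print("realistic target for a gain-oriented attacker:", best_outcome > 0)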


How probabilistic risk assessment can mislead terrorism risk analysts.

Traditional probabilistic risk assessment (PRA), of the type originally developed for engineered systems, is still proposed for terrorism risk analysis. We show that such PRA applications are unjustified in general. The capacity of terrorists to seek and use information and to actively research different attack options before deciding what to do raises unique features of terrorism risk assessme...


Improving Transferability of Adversarial Examples with Input Diversity

Though convolutional neural networks have achieved state-of-the-art performance on various vision tasks, they are extremely vulnerable to adversarial examples, which are obtained by adding human-imperceptible perturbations to the original images. Adversarial examples can thus be used as a useful tool to evaluate and select the most robust models in safety-critical applications. However, most of ...
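For readers unfamiliar with how such perturbations are typically constructed, the following Python sketch applies a fast-gradient-sign-style perturbation to a toy logistic model; the model, the input and the epsilon value are made up for illustration and are unrelated to the experimental setup of that paper.

import numpy as np

rng = np.random.default_rng(0)

# Toy logistic "model": probability of class 1 is sigmoid(x @ w).
w = rng.normal(size=4)
x = rng.normal(size=4)     # a clean input
y = 1.0                    # assumed true label of x

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# For this model, the gradient of the cross-entropy loss w.r.t. the input
# is (p - y) * w, where p is the predicted probability of class 1.
p = sigmoid(x @ w)
grad_x = (p - y) * w

# FGSM-style perturbation: a small step in the sign of the input gradient.
epsilon = 0.25
x_adv = x + epsilon * np.sign(grad_x)

print("prediction on clean input:     %.3f" % sigmoid(x @ w))
print("prediction on perturbed input: %.3f" % sigmoid(x_adv @ w))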



Journal title:

Volume   Issue

Pages  -

Publication date: 2014